Step 1 - Installing the Libraries

The application we are going to build summarizes the text you enter using the BART model and generates the images with DALL·E Mini.

  • DALL·E Mini is a free, open-source AI model that generates images from text prompts.

  • BART with a language-modeling head can be used for summarization; the Hugging Face implementation inherits from PreTrainedModel.

!nvidia-smi
!pip install min-dalle
!pip install gradio -q
!pip install transformers torch requests moviepy huggingface_hub opencv-python
!pip install moviepy
!pip install imageio-ffmpeg
!pip install imageio==2.4.1
!apt install imagemagick
# sed -i edits the policy file in place; piping cat into a redirect to the same file would truncate it before it is read
!sed -i 's/none/read,write/g' /etc/ImageMagick-6/policy.xml
!pip install mutagen
!pip install gtts
# Restart the runtime so the freshly installed packages are picked up
exit()
Thu Sep 15 08:49:56 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   58C    P8    10W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting min-dalle
  Downloading min-dalle-0.4.11.tar.gz (10 kB)
Requirement already satisfied: torch>=1.11 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (1.12.1+cu113)
Requirement already satisfied: typing_extensions>=4.1 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (4.1.1)
Requirement already satisfied: numpy>=1.21 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (1.21.6)
Requirement already satisfied: pillow>=7.1 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (7.1.2)
Requirement already satisfied: requests>=2.23 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (2.23.0)
Collecting emoji
  Downloading emoji-2.0.0.tar.gz (197 kB)
     |████████████████████████████████| 197 kB 33.6 MB/s 
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (2022.6.15)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (3.0.4)
Building wheels for collected packages: min-dalle, emoji
  Building wheel for min-dalle (setup.py) ... done
  Created wheel for min-dalle: filename=min_dalle-0.4.11-py3-none-any.whl size=10539 sha256=ba51393e127d5c830c0fc14e661b8353bc891c436f60ab4179dce9f88730b665
  Stored in directory: /root/.cache/pip/wheels/99/f1/33/770cd6855504c51f1456d0dceae9ecc5a63ee9da6b799a63cf
  Building wheel for emoji (setup.py) ... done
  Created wheel for emoji: filename=emoji-2.0.0-py3-none-any.whl size=193022 sha256=21560d9ae0528ec7bd63008b019579241973e656380414bb2fbb62a8d73e12e6
  Stored in directory: /root/.cache/pip/wheels/ec/29/4d/3cfe7452ac7d8d83b1930f8a6205c3c9649b24e80f9029fc38
Successfully built min-dalle emoji
Installing collected packages: emoji, min-dalle
Successfully installed emoji-2.0.0 min-dalle-0.4.11
  Building wheel for ffmpy (setup.py) ... done
  Building wheel for python-multipart (setup.py) ... done
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting transformers
  Downloading transformers-4.22.0-py3-none-any.whl (4.9 MB)
     |████████████████████████████████| 4.9 MB 36.5 MB/s 
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (1.12.1+cu113)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (2.23.0)
Requirement already satisfied: moviepy in /usr/local/lib/python3.7/dist-packages (0.2.3.5)
Collecting huggingface_hub
  Downloading huggingface_hub-0.9.1-py3-none-any.whl (120 kB)
     |████████████████████████████████| 120 kB 75.9 MB/s 
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (4.6.0.66)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.8.0)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.64.1)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2022.6.2)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.7/dist-packages (from transformers) (6.0)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.21.6)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.12.0)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.3)
Collecting tokenizers!=0.11.3,<0.13,>=0.11.1
  Downloading tokenizers-0.12.1-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (6.6 MB)
     |████████████████████████████████| 6.6 MB 54.0 MB/s 
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from huggingface_hub) (4.1.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (3.0.9)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests) (2022.6.15)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests) (2.10)
Requirement already satisfied: decorator<5.0,>=4.0.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (4.4.2)
Requirement already satisfied: imageio<3.0,>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (2.9.0)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from imageio<3.0,>=2.1.2->moviepy) (7.1.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.8.1)
Installing collected packages: tokenizers, huggingface-hub, transformers
Successfully installed huggingface-hub-0.9.1 tokenizers-0.12.1 transformers-4.22.0
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: moviepy in /usr/local/lib/python3.7/dist-packages (0.2.3.5)
Requirement already satisfied: tqdm<5.0,>=4.11.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (4.64.1)
Requirement already satisfied: imageio<3.0,>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (2.9.0)
Requirement already satisfied: decorator<5.0,>=4.0.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (4.4.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from moviepy) (1.21.6)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from imageio<3.0,>=2.1.2->moviepy) (7.1.2)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting imageio-ffmpeg
  Downloading imageio_ffmpeg-0.4.7-py3-none-manylinux2010_x86_64.whl (26.9 MB)
     |████████████████████████████████| 26.9 MB 1.3 MB/s 
Installing collected packages: imageio-ffmpeg
Successfully installed imageio-ffmpeg-0.4.7
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting imageio==2.4.1
  Downloading imageio-2.4.1.tar.gz (3.3 MB)
     |████████████████████████████████| 3.3 MB 23.8 MB/s 
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from imageio==2.4.1) (1.21.6)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from imageio==2.4.1) (7.1.2)
Building wheels for collected packages: imageio
  Building wheel for imageio (setup.py) ... done
  Created wheel for imageio: filename=imageio-2.4.1-py3-none-any.whl size=3303885 sha256=8d9ed2da5df687be7cb6640ecf3f63471bc3c97e84cd354f5214e8cd4964ca17
  Stored in directory: /root/.cache/pip/wheels/46/20/07/7bb9c8c44e6ec2efa60fd0e6280094f53f65f41767ef69a5ee
Successfully built imageio
Installing collected packages: imageio
  Attempting uninstall: imageio
    Found existing installation: imageio 2.9.0
    Uninstalling imageio-2.9.0:
      Successfully uninstalled imageio-2.9.0
Successfully installed imageio-2.4.1
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following package was automatically installed and is no longer required:
  libnvidia-common-460
Use 'apt autoremove' to remove it.
The following additional packages will be installed:
  fonts-droid-fallback fonts-noto-mono ghostscript gsfonts
  imagemagick-6-common imagemagick-6.q16 libcupsfilters1 libcupsimage2
  libdjvulibre-text libdjvulibre21 libgs9 libgs9-common libijs-0.35
  libjbig2dec0 liblqr-1-0 libmagickcore-6.q16-3 libmagickcore-6.q16-3-extra
  libmagickwand-6.q16-3 libnetpbm10 libwmf0.2-7 netpbm poppler-data
Suggested packages:
  fonts-noto ghostscript-x imagemagick-doc autotrace cups-bsd | lpr | lprng
  enscript gimp gnuplot grads hp2xx html2ps libwmf-bin mplayer povray radiance
  sane-utils texlive-base-bin transfig ufraw-batch inkscape libjxr-tools
  libwmf0.2-7-gtk poppler-utils fonts-japanese-mincho | fonts-ipafont-mincho
  fonts-japanese-gothic | fonts-ipafont-gothic fonts-arphic-ukai
  fonts-arphic-uming fonts-nanum
The following NEW packages will be installed:
  fonts-droid-fallback fonts-noto-mono ghostscript gsfonts imagemagick
  imagemagick-6-common imagemagick-6.q16 libcupsfilters1 libcupsimage2
  libdjvulibre-text libdjvulibre21 libgs9 libgs9-common libijs-0.35
  libjbig2dec0 liblqr-1-0 libmagickcore-6.q16-3 libmagickcore-6.q16-3-extra
  libmagickwand-6.q16-3 libnetpbm10 libwmf0.2-7 netpbm poppler-data
0 upgraded, 23 newly installed, 0 to remove and 32 not upgraded.
Need to get 18.4 MB of archives.
After this operation, 66.3 MB of additional disk space will be used.
Fetched 18.4 MB in 3s (5,807 kB/s)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting mutagen
  Downloading mutagen-1.45.1-py3-none-any.whl (218 kB)
     |████████████████████████████████| 218 kB 27.2 MB/s 
Installing collected packages: mutagen
Successfully installed mutagen-1.45.1
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting gtts
  Downloading gTTS-2.2.4-py3-none-any.whl (26 kB)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from gtts) (2.23.0)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gtts) (1.15.0)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from gtts) (7.1.2)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->gtts) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->gtts) (1.24.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->gtts) (2022.6.15)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->gtts) (3.0.4)
Installing collected packages: gtts
Successfully installed gtts-2.2.4

Step 2 - Importing Libraries

from moviepy.editor import *
from PIL import Image
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM,pipeline
import requests
import gradio as gr
import torch
import re
import os
import sys
from huggingface_hub import snapshot_download
import base64
import io
import cv2
Imageio: 'ffmpeg-linux64-v3.3.1' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg-linux64-v3.3.1 (43.8 MB)
Downloading: 45929032/45929032 bytes (100.0%)
  Done
File saved as /root/.imageio/ffmpeg/ffmpeg-linux64-v3.3.1.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 0 files to the new cache system
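Later on, each sentence of the story will be turned into its own DALL·E Mini prompt. A small helper like the following (a hypothetical sketch, not part of the original notebook) can split the text into scenes using the `re` module imported above:

```python
import re

def split_into_scenes(story: str):
    # Split on sentence-ending punctuation; each sentence becomes one image prompt.
    sentences = re.split(r'(?<=[.!?])\s+', story.strip())
    return [s for s in sentences if s]

print(split_into_scenes("Ella went to the supermarket. She bought flour! Then she baked a cake."))
```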

Step 3 - Creating the Application

In this part, we write the text from which we want to create our video story:

text = 'Once, there was a girl called Ella who went to the supermarket to buy the ingredients to make a cake, because it was her birthday and her friends were coming to her house to help her prepare it.'
print(text)
Once, there was a girl called Ella who went to the supermarket to buy the ingredients to make a cake, because it was her birthday and her friends were coming to her house to help her prepare it.

Next, we load our summarization model:

tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-12-6")
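As a quick sanity check (not part of the original notebook), the checkpoint can be exercised end to end like this; the generation parameters shown are illustrative assumptions, not values fixed by the tutorial:

```python
# A minimal summarization sketch using the same checkpoint loaded above.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-12-6")

text = (
    "Once, there was a girl called Ella who went to the supermarket "
    "to buy the ingredients to make a cake."
)
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    inputs["input_ids"],
    num_beams=4,    # beam search tends to give more fluent summaries
    min_length=10,
    max_length=60,
)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```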
tokenizer
PreTrainedTokenizerFast(name_or_path='sshleifer/distilbart-cnn-12-6', vocab_size=50265, model_max_len=1024, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'sep_token': AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'cls_token': AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=True), 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True)})
model
BartForConditionalGeneration(
  (model): BartModel(
    (shared): Embedding(50264, 1024, padding_idx=1)
    (encoder): BartEncoder(
      (embed_tokens): Embedding(50264, 1024, padding_idx=1)
      (embed_positions): BartLearnedPositionalEmbedding(1026, 1024)
      (layers): ModuleList(
        (0): BartEncoderLayer(
          (self_attn): BartAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (activation_fn): GELUActivation()
          (fc1): Linear(in_features=1024, out_features=4096, bias=True)
          (fc2): Linear(in_features=4096, out_features=1024, bias=True)
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (1): BartEncoderLayer(
          (self_attn): BartAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (activation_fn): GELUActivation()
          (fc1): Linear(in_features=1024, out_features=4096, bias=True)
          (fc2): Linear(in_features=4096, out_features=1024, bias=True)
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (2): BartEncoderLayer(
          (self_attn): BartAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (activation_fn): GELUActivation()
          (fc1): Linear(in_features=1024, out_features=4096, bias=True)
          (fc2): Linear(in_features=4096, out_features=1024, bias=True)
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (3) ... (11): 9 more BartEncoderLayer blocks, identical in structure to the layers shown above (repeated output truncated)
      )
      (layernorm_embedding): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
    (decoder): BartDecoder(
      (embed_tokens): Embedding(50264, 1024, padding_idx=1)
      (embed_positions): BartLearnedPositionalEmbedding(1026, 1024)
      (layers): ModuleList(
        (0): BartDecoderLayer(
          (self_attn): BartAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (activation_fn): GELUActivation()
          (self_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (encoder_attn): BartAttention(
            (k_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (v_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (q_proj): Linear(in_features=1024, out_features=1024, bias=True)
            (out_proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (encoder_attn_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=1024, out_features=4096, bias=True)
          (fc2): Linear(in_features=4096, out_features=1024, bias=True)
          (final_layer_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
        )
        (1) ... (5): 5 more BartDecoderLayer blocks, identical in structure to layer (0) above (repeated output truncated)
      )
      (layernorm_embedding): LayerNorm((1024,), eps=1e-05, elementwise_affine=True)
    )
  )
  (lm_head): Linear(in_features=1024, out_features=50264, bias=False)
)

Now that we have loaded our pretrained model to summarize the text, we can generate the summary.

inputs = tokenizer(text,
                   max_length=1024,
                   truncation=True,
                   return_tensors="pt")

summary_ids = model.generate(inputs["input_ids"])
summary = tokenizer.batch_decode(summary_ids,
                                 skip_special_tokens=True,
                                 clean_up_tokenization_spaces=False)
plot = list(summary[0].split('.'))
WARNING:py.warnings:/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py:1232: UserWarning: Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to 142 (`self.config.max_length`). Controlling `max_length` via the config is deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  UserWarning,

text
'Once, there was a girl called Ella who went to the supermarket to buy the ingredients to make a cake. Because today is her birthday and her friends come to her house and help her to prepare the cake.'
plot
[' Ella went to the supermarket to buy the ingredients to make a cake ',
 ' Today is her birthday and her friends come to her house and help her to prepare the cake ',
 " Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday ",
 '']
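Splitting on `'.'` leaves an empty trailing element (visible above), which is why the later loops iterate over `plot[:-1]`. As a sketch of a slightly more robust alternative, we can strip whitespace and drop empty fragments at split time:

```python
summary_text = "Ella went to the supermarket. Today is her birthday. Her friends help her."
# strip whitespace and drop empty fragments instead of relying on plot[:-1]
plot = [s.strip() for s in summary_text.split('.') if s.strip()]
print(plot)
```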

Now we create the main program that generates the images.

import argparse
import os
from PIL import Image
from min_dalle import MinDalle
import torch

def save_image(image: Image.Image, path: str):
    if os.path.isdir(path):
        path = os.path.join(path, 'generated.png')
    elif not path.endswith('.png'):
        path += '.png'
    print("saving image to", path)
    image.save(path)
    return image

def generate_image(
    is_mega: bool,
    text: str,
    seed: int,
    grid_size: int,
    top_k: int,
    image_path: str,
    models_root: str,
    fp16: bool,
):
    model = MinDalle(
        is_mega=is_mega, 
        models_root=models_root,
        is_reusable=False,
        is_verbose=True,
        dtype=torch.float16 if fp16 else torch.float32
    )

    image = model.generate_image(
        text, 
        seed, 
        grid_size, 
        top_k=top_k, 
        is_verbose=True
    )
    save_image(image, image_path)
    # reopen the saved file instead of hardcoding "generated.png"
    im = Image.open(image_path if image_path.endswith('.png') else image_path + '.png')
    return im
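The file-naming logic inside `save_image` can be isolated into a small pure helper so it is easy to reason about (a sketch; `resolve_output_path` is a hypothetical name, not part of min-dalle):

```python
import os

def resolve_output_path(path: str) -> str:
    # mirror save_image's rules: a directory gets 'generated.png',
    # a bare name gets a '.png' suffix, an explicit .png passes through
    if os.path.isdir(path):
        return os.path.join(path, 'generated.png')
    if not path.endswith('.png'):
        return path + '.png'
    return path

print(resolve_output_path('generated'))  # generated.png
```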

Let us generate the images from our summary text

generated_images = []
for senten in plot[:-1]:
  print(senten)
  image = generate_image(
    is_mega=True,    # 'store_true' was an argparse leftover; a boolean is intended
    text=senten,
    seed=1,
    grid_size=1,
    top_k=256,
    image_path='generated',
    models_root='pretrained',
    fp16=True)       # fp16 expects a boolean, not 256
  display(image)
  generated_images.append(image)
 Ella went to the supermarket to buy the ingredients to make a cake 
using device cuda
downloading tokenizer params
intializing TextTokenizer
tokenizing text
['Ġella']
['Ġwent']
['Ġto']
['Ġthe']
['Ġsupermarket']
['Ġto']
['Ġbuy']
['Ġthe']
['Ġingredients']
['Ġto']
['Ġmake']
['Ġa']
['Ġcake']
15 text tokens [0, 16871, 8398, 123, 99, 12553, 123, 403, 99, 13241, 123, 1077, 58, 2354, 2]
downloading encoder params
initializing DalleBartEncoder
encoding text tokens
downloading decoder params
initializing DalleBartDecoder
downloading detokenizer params
initializing VQGanDetokenizer
detokenizing image
saving image to generated.png
 Today is her birthday and her friends come to her house and help her to prepare the cake 
using device cuda
intializing TextTokenizer
tokenizing text
['Ġtoday']
['Ġis']
['Ġher']
['Ġbirthday']
['Ġand']
['Ġher']
['Ġfriends']
['Ġcome']
['Ġto']
['Ġher']
['Ġhouse']
['Ġand']
['Ġhelp']
['Ġher']
['Ġto']
['Ġprepare']
['Ġthe']
['Ġcake']
20 text tokens [0, 1535, 231, 447, 1249, 128, 447, 3103, 4118, 123, 447, 610, 128, 1980, 447, 123, 11147, 99, 2354, 2]
initializing DalleBartEncoder
encoding text tokens
initializing DalleBartDecoder
initializing VQGanDetokenizer
detokenizing image
saving image to generated.png
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday 
using device cuda
intializing TextTokenizer
tokenizing text
['Ġella', "'s"]
['Ġfriends']
['Ġhelp']
['Ġher']
['Ġprepare']
['Ġher']
['Ġbirthday']
['Ġcake']
['Ġand']
['Ġhelp']
['Ġprepare']
['Ġit']
['Ġfor']
['Ġher']
['Ġfriends', "'"]
['Ġbirthday']
20 text tokens [0, 16871, 168, 3103, 1980, 447, 11147, 447, 1249, 2354, 128, 1980, 11147, 353, 129, 447, 3103, 9, 1249, 2]
initializing DalleBartEncoder
encoding text tokens
initializing DalleBartDecoder
initializing VQGanDetokenizer
detokenizing image
saving image to generated.png
for senten in plot[:-1]:
  print(senten)
 Ella went to the supermarket to buy the ingredients to make a cake 
 Today is her birthday and her friends come to her house and help her to prepare the cake 
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday 
sentences = plot[:-1]
num_sentences = len(sentences)
assert len(generated_images) == len(sentences), 'Something is wrong'
for k in range(len(generated_images)):
    display(generated_images[k])
    print(sentences[k])
 Ella went to the supermarket to buy the ingredients to make a cake 
 Today is her birthday and her friends come to her house and help her to prepare the cake 
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday 

Step 4 - Creation of the subtitles

There are two ways to create the subtitles

  • With spaCy
  • With NLTK

By default, spaCy uses its dependency parser to do sentence segmentation, which requires loading a statistical model. The sentencizer is a rule-based sentence segmenter that you can use to define your own sentence segmentation rules without loading a model.
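To see the idea behind a rule-based segmenter without loading any model, here is a toy stand-in using only the standard library (an illustration of the concept, not spaCy's actual implementation):

```python
import re

def naive_sents(text):
    # split after '.', '!' or '?' when followed by whitespace
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

print(naive_sents("Hello there. How are you?"))
```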

import spacy
nlp = spacy.load('en_core_web_sm') # or whatever model you have installed
raw_text = sentences[1]
doc = nlp(raw_text)
subtitles = [sent.text.strip() for sent in doc.sents]
subtitles
['Today is her birthday and her friends come to her house and help her to prepare the cake']
import spacy
nlp = spacy.load('en_core_web_sm') # or whatever model you have installed
c = 0
sub_names = []
for k in range(len(generated_images)): 
  raw_text = sentences[k]
  doc = nlp(raw_text)
  subtitles = [sent.text.strip() for sent in doc.sents]
  sub_names.append(subtitles)
  print(raw_text,subtitles, len(subtitles))
 Ella went to the supermarket to buy the ingredients to make a cake  ['Ella went to the supermarket to buy the ingredients to make a cake'] 1
 Today is her birthday and her friends come to her house and help her to prepare the cake  ['Today is her birthday and her friends come to her house and help her to prepare the cake'] 1
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday  ['', "Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday"] 2
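Notice that the third sentence yields two segments, the first of which is empty (the leading whitespace confuses segmentation). A simple cleanup step drops such empty segments before they are used as subtitles:

```python
subtitles = ['', "Ella's friends help her prepare her birthday cake"]
# drop empty or whitespace-only segments before using them as subtitles
subtitles = [s.strip() for s in subtitles if s.strip()]
print(len(subtitles))
```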

The NLTK tokenizer package divides strings into lists of substrings. For example, we can return a sentence-tokenized copy of the text using NLTK's recommended sentence tokenizer:

import nltk
nltk.download('punkt')
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Unzipping tokenizers/punkt.zip.
True
from nltk import tokenize
subtitles=tokenize.sent_tokenize(sentences[1], language='english')
print(sentences[1],subtitles)
 Today is her birthday and her friends come to her house and help her to prepare the cake  [' Today is her birthday and her friends come to her house and help her to prepare the cake']

We can generate our list of subtitles

from nltk import tokenize
c = 0
sub_names = []
for k in range(len(generated_images)): 
  subtitles=tokenize.sent_tokenize(sentences[k])
  sub_names.append(subtitles)
  print(sentences[k],subtitles, len(subtitles))
 Ella went to the supermarket to buy the ingredients to make a cake  [' Ella went to the supermarket to buy the ingredients to make a cake'] 1
 Today is her birthday and her friends come to her house and help her to prepare the cake  [' Today is her birthday and her friends come to her house and help her to prepare the cake'] 1
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday  [" Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday"] 1
sub_names[2]
[" Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday"]

Step 5 - Adding Subtitles to the Images

from PIL import ImageDraw
# copying image to another image object
image = generated_images[2].copy()
add_subtitle=sub_names[2][0]
display(image)
print(add_subtitle)
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday
ImageDraw.Draw(
    image  # Image
).text(
    (0, 0),  # Coordinates
    add_subtitle,  # Text
    (0, 0, 0)  # Color
)
display(image)

We need a way to determine the rendered size of the text; a simple option is to use one of the TrueType fonts already installed on the system:

!ls  /usr/share/fonts/truetype/liberation
LiberationMono-BoldItalic.ttf	     LiberationSansNarrow-Bold.ttf
LiberationMono-Bold.ttf		     LiberationSansNarrow-Italic.ttf
LiberationMono-Italic.ttf	     LiberationSansNarrow-Regular.ttf
LiberationMono-Regular.ttf	     LiberationSans-Regular.ttf
LiberationSans-BoldItalic.ttf	     LiberationSerif-BoldItalic.ttf
LiberationSans-Bold.ttf		     LiberationSerif-Bold.ttf
LiberationSans-Italic.ttf	     LiberationSerif-Italic.ttf
LiberationSansNarrow-BoldItalic.ttf  LiberationSerif-Regular.ttf
from PIL import ImageFont, ImageDraw, Image
#image = Image.open('test.jpg')
image = generated_images[2].copy()
draw = ImageDraw.Draw(image)
txt = sub_names[2][0]
fontsize = 1  # starting font size
W, H = image.size
# portion of image width you want text width to be
blank = Image.new('RGB',(256, 256))

#font = ImageFont.truetype("KeepCalm-Medium.ttf", fontsize)
path_font ="/usr/share/fonts/truetype/liberation/LiberationSans-Bold.ttf"
# use a truetype font
#font = ImageFont.truetype("arial.ttf", fontsize)
font = ImageFont.truetype(path_font, fontsize)
print(image.size)
print(blank.size)
while (font.getsize(txt)[0] < blank.size[0]) and (font.getsize(txt)[1] < blank.size[1]):
    # iterate until the text size is just larger than the criteria
    fontsize += 1
    font = ImageFont.truetype(path_font, fontsize)
    # optionally de-increment to be sure it is less than criteria
fontsize -= 1
font = ImageFont.truetype(path_font, fontsize)
w, h = draw.textsize(txt, font=font)
print('final font size',fontsize)
draw.text(((W-w)/2,(H-h)/2), txt, font=font, fill="white") # put the text on the image
display(image)
(256, 256)
(256, 256)
final font size 5
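The linear search above can be expressed as a pure function so it is easy to test. In this sketch, glyph width is approximated as a fixed fraction of the font size (`0.6` is an assumed average aspect ratio, and `fit_font_size` is a hypothetical helper, not a Pillow API):

```python
def fit_font_size(text: str, box_w: int, box_h: int, aspect: float = 0.6) -> int:
    # grow the size while the estimated text width and height still fit the box
    size = 1
    while len(text) * aspect * (size + 1) <= box_w and (size + 1) <= box_h:
        size += 1
    return size

print(fit_font_size("Ella's friends help her prepare her birthday cake", 256, 256))
```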

However, we need to wrap longer text onto multiple lines.

from PIL import Image, ImageDraw, ImageFont
import textwrap

def draw_multiple_line_text(image, text, font, text_color, text_start_height):
    '''
    From unutbu on [python PIL draw multiline text on image](https://stackoverflow.com/a/7698300/395857)
    '''
    draw = ImageDraw.Draw(image)
    image_width, image_height = image.size
    y_text = text_start_height
    lines = textwrap.wrap(text, width=40)
    for line in lines:
        line_width, line_height = font.getsize(line)
        draw.text(((image_width - line_width) / 2, y_text), 
                  line, font=font, fill=text_color)
        y_text += line_height
def add_text_to_img(text1, image_input):
    '''
    Testing draw_multiple_line_text
    '''
    image = image_input
    fontsize = 13  # starting font size
    path_font = "/usr/share/fonts/truetype/liberation/LiberationSans-Bold.ttf"
    font = ImageFont.truetype(path_font, fontsize)
    text_color = (255, 255, 0)
    text_start_height = 200
    draw_multiple_line_text(image, text1, font, text_color, text_start_height)
    return image
image = generated_images[2].copy()
add_subtitle=sub_names[2][0]
result=add_text_to_img(add_subtitle,image)
display(result)
print(add_subtitle)
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday
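`textwrap.wrap` is what breaks the subtitle into lines of at most 40 characters each:

```python
import textwrap

subtitle = ("Ella's friends help her prepare her birthday cake "
            "and help prepare it for her friends' birthday")
lines = textwrap.wrap(subtitle, width=40)
for line in lines:
    print(line)
```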

The text now auto-adjusts to the image with PIL; we apply it to all the generated images:

generated_images_sub = []
for k in range(len(generated_images)): 
  image = generated_images[k].copy()
  text_to_add=sub_names[k][0]
  result=add_text_to_img(text_to_add,image)
  generated_images_sub.append(result)
  display(result)
  print(text_to_add, len(sub_names[k]))
 Ella went to the supermarket to buy the ingredients to make a cake 1
 Today is her birthday and her friends come to her house and help her to prepare the cake 1
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday 1

Step 5.1 - Creation of the Video

c = 0
file_names = []
for img in generated_images_sub:
  f_name = 'img_'+str(c)+'.jpg'
  file_names.append(f_name)
  img = img.save(f_name)
  c+=1
print(file_names)
from moviepy.editor import ImageClip, concatenate_videoclips

clips = [ImageClip(m).set_duration(3)
         for m in file_names]

concat_clip = concatenate_videoclips(clips, method="compose")
concat_clip.write_videofile("result.mp4", fps=24)
['img_0.jpg', 'img_1.jpg', 'img_2.jpg']
[MoviePy] >>>> Building video result.mp4
[MoviePy] Writing video result.mp4
100%|█████████▉| 216/217 [00:00<00:00, 346.86it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: result.mp4 
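The progress bar above is consistent with the clip settings: 3 images at 3 seconds each, rendered at 24 fps, gives 216 frames:

```python
n_images, seconds_per_image, fps = 3, 3, 24
total_frames = n_images * seconds_per_image * fps
print(total_frames)  # 216
```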


Step 6 - Displaying the Video

from IPython.display import HTML
from base64 import b64encode
import os

# Input video path
save_path = "result.mp4"

# Compressed video path
compressed_path = "result_compressed.mp4"

os.system(f"ffmpeg -i {save_path} -vcodec libx264 {compressed_path}")

# Show video
mp4 = open(compressed_path,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=400 controls>
      <source src="%s" type="video/mp4">
</video>
""" % data_url)

Step 7 - Creation of audio

!pip install gTTS
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: gTTS in /usr/local/lib/python3.7/dist-packages (2.2.4)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from gTTS) (7.1.2)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from gTTS) (2.23.0)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gTTS) (1.15.0)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (2.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (2022.6.15)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (1.24.3)
# Text-to-speech conversion
from gtts import gTTS
from IPython.display import Audio
from IPython.display import display
!pip install mutagen
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: mutagen in /usr/local/lib/python3.7/dist-packages (1.45.1)
mytext = sub_names[1][0]
# Language in which you want to convert
language = 'en'
# Pass the text and language to the engine;
# slow=False tells gTTS to read the text at
# normal speed rather than slowed down
myobj = gTTS(text=mytext, lang=language, slow=False)
# Save the converted audio to an mp3 file
sound_file = "audio.mp3"
myobj.save(sound_file)
from mutagen.mp3 import MP3
audio = MP3("audio.mp3")
print(audio.info.length)
6.36
wn = Audio(sound_file, autoplay=True) ##
display(wn)##
from mutagen.mp3 import MP3
c = 0
mp3_names = []
mp3_lengths = []
for k in range(len(generated_images)):
    text_to_add=sub_names[k][0]
    print(text_to_add)
    f_name = 'audio_'+str(c)+'.mp3'
    mp3_names.append(f_name)
    # The text that you want to convert to audio
    mytext = text_to_add
    # Language in which you want to convert
    language = 'en'
    # slow=False: read the text at normal speed
    myobj = gTTS(text=mytext, lang=language, slow=False)
    # Save the converted audio to an mp3 file
    sound_file = f_name
    myobj.save(sound_file)
    audio = MP3(sound_file)
    duration=audio.info.length
    mp3_lengths.append(duration)
    print(audio.info.length)
    c+=1
print(mp3_names)
print(mp3_lengths)
 Ella went to the supermarket to buy the ingredients to make a cake
5.04
 Today is her birthday and her friends come to her house and help her to prepare the cake
6.336
 Ella's friends help her prepare her birthday cake and help prepare it for her friends' birthday
6.624
['audio_0.mp3', 'audio_1.mp3', 'audio_2.mp3']
[5.04, 6.336, 6.624]
wn = Audio(mp3_names[0], autoplay=True) ##
display(wn)##

Step 8 - Merge audio files

!zip archive.zip *.mp* *.jpg *.png
  adding: audio_0.mp3 (deflated 5%)
  adding: audio_1.mp3 (deflated 6%)
  adding: audio_2.mp3 (deflated 5%)
  adding: audio.mp3 (deflated 5%)
  adding: result_compressed.mp4 (deflated 7%)
  adding: result_compressed_new.mp4 (deflated 15%)
  adding: result_final_compressed.mp4 (deflated 10%)
  adding: result_final.mp4 (deflated 16%)
  adding: result.mp3 (deflated 10%)
  adding: result.mp4 (deflated 7%)
  adding: result_new.mp4 (deflated 15%)
  adding: __temp__.mp4 (deflated 19%)
  adding: img_0.jpg (deflated 1%)
  adding: img_1.jpg (deflated 1%)
  adding: img_2.jpg (deflated 1%)
  adding: generated.png (deflated 0%)
from pydub import AudioSegment
from os import getcwd
import glob
cwd = getcwd().replace(chr(92), '/')  # chr(92) is a backslash; normalize Windows paths
export_path = 'result.mp3'
MP3_FILES = glob.glob(pathname=f'{cwd}/*.mp3', recursive=True)
mp3_names
['audio_0.mp3', 'audio_1.mp3', 'audio_2.mp3']
silence = AudioSegment.silent(duration=500)
full_audio = AudioSegment.empty()    # this will accumulate the entire mp3 audios
for n, mp3_file in enumerate(mp3_names):
    mp3_file = mp3_file.replace(chr(92), '/')
    print(n, mp3_file)

    # Load the current mp3 into `audio_segment`
    audio_segment = AudioSegment.from_mp3(mp3_file)

    # Just accumulate the new `audio_segment` + `silence`
    full_audio += audio_segment + silence
    print('Merging ', n)

# The loop will exit once all files in the list have been used
# Then export    
full_audio.export(export_path, format='mp3')
print('\ndone!')
0 audio_0.mp3
Merging  0
1 audio_1.mp3
Merging  1
2 audio_2.mp3
Merging  2

done!
wn = Audio(export_path, autoplay=True)
display(wn)

Step 9 - Creation of the video with durations adjusted to the audio

c = 0
file_names = []
for img in generated_images_sub:
  f_name = 'img_'+str(c)+'.jpg'
  file_names.append(f_name)
  img = img.save(f_name)
  c+=1
print(file_names)
clips=[]
d=0
for m in file_names:
  duration=mp3_lengths[d]
  print(d,duration)
  clips.append(ImageClip(m).set_duration(duration+0.5))
  d+=1
concat_clip = concatenate_videoclips(clips, method="compose")
concat_clip.write_videofile("result_new.mp4", fps=24)
['img_0.jpg', 'img_1.jpg', 'img_2.jpg']
0 5.04
1 6.336
2 6.624
[MoviePy] >>>> Building video result_new.mp4
[MoviePy] Writing video result_new.mp4
100%|█████████▉| 468/469 [00:00<00:00, 552.53it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: result_new.mp4 
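The total length is easy to verify by hand: each image clip lasts its mp3 duration plus the 0.5 s pause added in the loop. A minimal check using the durations printed above:

```python
# Each clip lasts its mp3 duration plus the 0.5 s pause added in the loop above
mp3_lengths = [5.04, 6.336, 6.624]

total = sum(d + 0.5 for d in mp3_lengths)
print(total)  # ~19.5 s, matching the duration moviepy reports for the final clip
```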

from IPython.display import HTML
from base64 import b64encode
import os

# Input video path
save_path = "result_new.mp4"

# Compressed video path
compressed_path = "result_compressed_new.mp4"

os.system(f"ffmpeg -i {save_path} -vcodec libx264 {compressed_path}")

# Show video
mp4 = open(compressed_path,'rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=400 controls>
      <source src="%s" type="video/mp4">
</video>
""" % data_url)

Step 10 - Merge Video + Audio

movie_name = 'result_new.mp4'
export_path='result.mp3'
movie_final= 'result_final.mp4'
from moviepy.editor import *
# loading the generated video
clip = VideoFileClip(movie_name)  
# getting duration of the video
duration = clip.duration
# printing duration
print("Duration : " + str(duration))
# showing final clip
clip.ipython_display()
Duration : 19.5
100%|█████████▉| 468/469 [00:00<00:00, 997.81it/s] 
def combine_audio(vidname, audname, outname, fps=60): 
    import moviepy.editor as mpe
    my_clip = mpe.VideoFileClip(vidname)
    audio_background = mpe.AudioFileClip(audname)
    final_clip = my_clip.set_audio(audio_background)
    final_clip.write_videofile(outname,fps=fps)
combine_audio(movie_name, export_path, movie_final) # creates a new file with the audio track added
[MoviePy] >>>> Building video result_final.mp4
[MoviePy] Writing audio in result_finalTEMP_MPY_wvf_snd.mp3
100%|██████████| 432/432 [00:00<00:00, 1072.43it/s]
[MoviePy] Done.
[MoviePy] Writing video result_final.mp4
100%|█████████▉| 1170/1171 [00:01<00:00, 645.62it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: result_final.mp4 
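The frame counts in the progress bars above follow directly from duration × fps: result_new.mp4 was written at 24 fps, while combine_audio defaults to fps=60, so the merged file is re-rendered with roughly 2.5× as many frames:

```python
# Frame count = duration (seconds) * frame rate (fps)
duration = 19.5  # total clip length in seconds, printed above

frames_24 = duration * 24  # result_new.mp4 was written with fps=24
frames_60 = duration * 60  # combine_audio() re-renders with its default fps=60

print(frames_24, frames_60)  # 468.0 1170.0
```

Passing fps=24 to combine_audio would keep the original frame rate and avoid the unnecessary re-render at 60 fps.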


from IPython.display import HTML
from base64 import b64encode
import os
def compress_video(input_video):
    # Input video path
    save_path = input_video
    # Compressed video path
    compressed_path = save_path.replace(".mp4", "_compressed.mp4")
    print(compressed_path)
    os.system(f"ffmpeg -i {save_path} -vcodec libx264 {compressed_path}")
    # Show video
    mp4 = open(compressed_path,'rb').read()
    data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
    return HTML("""
    <video width=400 controls>
          <source src="%s" type="video/mp4">
    </video>
    """ % data_url)
compress_video("result_final.mp4")
result_final_compressed.mp4
mp4 = open('result_final.mp4','rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=400 controls>
      <source src="%s" type="video/mp4">
</video>
""" % data_url)
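Note that `os.system` silently ignores a failing ffmpeg run. A minimal alternative sketch using `subprocess` (the helper names and the added `-y` overwrite flag are my own, not part of the original code):

```python
import subprocess

def ffmpeg_cmd(src: str, dst: str) -> list:
    # Same ffmpeg invocation as above, plus -y to overwrite an existing output
    return ["ffmpeg", "-y", "-i", src, "-vcodec", "libx264", dst]

def compress_with_ffmpeg(src: str, dst: str) -> None:
    # check=True raises CalledProcessError if ffmpeg exits with an error,
    # whereas os.system only returns an exit code that the notebook ignores
    subprocess.run(ffmpeg_cmd(src, dst), check=True)
```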

Great! Now the next step is the creation of the Hugging Face environment.

Video Story Creator - Full code in a single block

!nvidia-smi
!pip install min-dalle
!pip install gradio -q
!pip install transformers torch requests moviepy huggingface_hub opencv-python
!pip install moviepy
!pip install imageio-ffmpeg
!pip install imageio==2.4.1
!apt install imagemagick
!cat /etc/ImageMagick-6/policy.xml | sed 's/none/read,write/g'> /etc/ImageMagick-6/policy.xml
!pip install gTTS
!pip install mutagen
#We reset the runtime
exit()
Thu Sep 15 09:40:51 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:00:04.0 Off |                    0 |
| N/A   65C    P0    30W /  70W |   1892MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: min-dalle in /usr/local/lib/python3.7/dist-packages (0.4.11)
Requirement already satisfied: typing-extensions>=4.1 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (4.1.1)
Requirement already satisfied: torch>=1.11 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (1.12.1+cu113)
Requirement already satisfied: requests>=2.23 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (2.23.0)
Requirement already satisfied: pillow>=7.1 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (7.1.2)
Requirement already satisfied: emoji in /usr/local/lib/python3.7/dist-packages (from min-dalle) (2.0.0)
Requirement already satisfied: numpy>=1.21 in /usr/local/lib/python3.7/dist-packages (from min-dalle) (1.21.6)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (2022.6.15)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (3.0.4)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests>=2.23->min-dalle) (1.24.3)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: transformers in /usr/local/lib/python3.7/dist-packages (4.22.0)
Requirement already satisfied: torch in /usr/local/lib/python3.7/dist-packages (1.12.1+cu113)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (2.23.0)
Requirement already satisfied: moviepy in /usr/local/lib/python3.7/dist-packages (0.2.3.5)
Requirement already satisfied: huggingface_hub in /usr/local/lib/python3.7/dist-packages (0.9.1)
Requirement already satisfied: opencv-python in /usr/local/lib/python3.7/dist-packages (4.6.0.66)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.7/dist-packages (from transformers) (6.0)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (2022.6.2)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers) (3.8.0)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers) (4.64.1)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.7/dist-packages (from transformers) (21.3)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers) (1.21.6)
Requirement already satisfied: tokenizers!=0.11.3,<0.13,>=0.11.1 in /usr/local/lib/python3.7/dist-packages (from transformers) (0.12.1)
Requirement already satisfied: importlib-metadata in /usr/local/lib/python3.7/dist-packages (from transformers) (4.12.0)
Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from huggingface_hub) (4.1.1)
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging>=20.0->transformers) (3.0.9)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests) (2022.6.15)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests) (2.10)
Requirement already satisfied: imageio<3.0,>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (2.4.1)
Requirement already satisfied: decorator<5.0,>=4.0.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (4.4.2)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from imageio<3.0,>=2.1.2->moviepy) (7.1.2)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata->transformers) (3.8.1)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: moviepy in /usr/local/lib/python3.7/dist-packages (0.2.3.5)
Requirement already satisfied: imageio<3.0,>=2.1.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (2.4.1)
Requirement already satisfied: tqdm<5.0,>=4.11.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (4.64.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from moviepy) (1.21.6)
Requirement already satisfied: decorator<5.0,>=4.0.2 in /usr/local/lib/python3.7/dist-packages (from moviepy) (4.4.2)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from imageio<3.0,>=2.1.2->moviepy) (7.1.2)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: imageio-ffmpeg in /usr/local/lib/python3.7/dist-packages (0.4.7)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: imageio==2.4.1 in /usr/local/lib/python3.7/dist-packages (2.4.1)
Requirement already satisfied: pillow in /usr/local/lib/python3.7/dist-packages (from imageio==2.4.1) (7.1.2)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from imageio==2.4.1) (1.21.6)
Reading package lists... Done
Building dependency tree       
Reading state information... Done
imagemagick is already the newest version (8:6.9.7.4+dfsg-16ubuntu6.13).
The following package was automatically installed and is no longer required:
  libnvidia-common-460
Use 'apt autoremove' to remove it.
0 upgraded, 0 newly installed, 0 to remove and 32 not upgraded.
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: gTTS in /usr/local/lib/python3.7/dist-packages (2.2.4)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from gTTS) (2.23.0)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from gTTS) (1.15.0)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from gTTS) (7.1.2)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (2022.6.15)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->gTTS) (2.10)
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Requirement already satisfied: mutagen in /usr/local/lib/python3.7/dist-packages (1.45.1)
from moviepy.editor import *
from PIL import Image
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM,pipeline
import requests
import gradio as gr
import torch
import re
import os
import sys
from huggingface_hub import snapshot_download
import base64
import io
import cv2
import argparse
import os
from PIL import Image
from min_dalle import MinDalle
import torch
from PIL import Image, ImageDraw, ImageFont
import textwrap
from mutagen.mp3 import MP3
# Import the required module for text
# to speech conversion
from gtts import gTTS
from IPython.display import Audio
from IPython.display import display
from pydub import AudioSegment
from os import getcwd
import glob
import nltk
from IPython.display import HTML
from base64 import b64encode
nltk.download('punkt')
# Load the summarization tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-12-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-12-6")

# Step 3 - Creation of the application
text ='Once, there was a girl called Laura who went to the supermarket to buy the ingredients to make a cake. Because today is her birthday and her friends come to her house and help her to prepare the cake.'
inputs = tokenizer(text, 
                  max_length=1024, 
                  truncation=True,
                  return_tensors="pt")
  
summary_ids = model.generate(inputs["input_ids"])
summary = tokenizer.batch_decode(summary_ids, 
                                skip_special_tokens=True, 
                                clean_up_tokenization_spaces=False)
plot = list(summary[0].split('.'))
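Note that `str.split('.')` leaves a trailing empty string whenever the summary ends with a period, which is why the loops below iterate over `plot[:-1]`. A quick illustration:

```python
# Splitting on '.' keeps an empty final element when the text ends with '.'
summary = "She went to the supermarket. Her friends came to help."
plot = summary.split('.')

print(plot)  # ['She went to the supermarket', ' Her friends came to help', '']
```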
def save_image(image: Image.Image, path: str):
    if os.path.isdir(path):
        path = os.path.join(path, 'generated.png')
    elif not path.endswith('.png'):
        path += '.png'
    print("saving image to", path)
    image.save(path)
    return image
def generate_image(
    is_mega: bool,
    text: str,
    seed: int,
    grid_size: int,
    top_k: int,
    image_path: str,
    models_root: str,
    fp16: bool,
):
    model = MinDalle(
        is_mega=is_mega, 
        models_root=models_root,
        is_reusable=False,
        is_verbose=True,
        dtype=torch.float16 if fp16 else torch.float32
    )

    image = model.generate_image(
        text, 
        seed, 
        grid_size, 
        top_k=top_k, 
        is_verbose=True
    )
    #save_image(image, image_path)
    #image = Image.open("generated.png")
    return image 

#Let us generate the images from our summary text
generated_images = []
for senten in plot[:-1]:
  #print(senten)
  image=generate_image(
    is_mega=True,
    text=senten,
    seed=1,
    grid_size=1,
    top_k=256,
    image_path='generated',
    models_root='pretrained',
    fp16=True,)
  #display(image)
  generated_images.append(image)

# Step 4 - Creation of the subtitles
sentences =plot[:-1]
num_sentences=len(sentences)
assert len(generated_images) == len(sentences), 'Something is wrong'
#We can generate our list of subtitles
from nltk import tokenize
c = 0
sub_names = []
for k in range(len(generated_images)): 
  subtitles=tokenize.sent_tokenize(sentences[k])
  sub_names.append(subtitles)
  #print(subtitles, len(subtitles))
  #!ls  /usr/share/fonts/truetype/liberation
# Step 5 - Adding Subtitles to the Images
def draw_multiple_line_text(image, text, font, text_color, text_start_height):
    draw = ImageDraw.Draw(image)
    image_width, image_height = image.size
    y_text = text_start_height
    lines = textwrap.wrap(text, width=40)
    for line in lines:
        line_width, line_height = font.getsize(line)
        draw.text(((image_width - line_width) / 2, y_text), 
                  line, font=font, fill=text_color)
        y_text += line_height

def add_text_to_img(text1,image_input):
    '''
    Add wrapped subtitle text onto an image.
    '''
    image =image_input
    fontsize = 13  # starting font size
    path_font="/usr/share/fonts/truetype/liberation/LiberationSans-Bold.ttf"
    font = ImageFont.truetype(path_font, fontsize)
    text_color = (255,255,0)
    text_start_height = 200
    draw_multiple_line_text(image, text1, font, text_color, text_start_height)
    return image
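`draw_multiple_line_text` relies on `textwrap.wrap` to break a subtitle into lines of at most 40 characters without splitting words; for example:

```python
import textwrap

subtitle = "Ella went to the supermarket to buy the ingredients to make a cake"
lines = textwrap.wrap(subtitle, width=40)
for line in lines:
    print(line)  # each line is at most 40 characters, words kept whole
```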

# Testing
#for k in range(len(generated_images)):
#    display(generated_images[k])
#    print(sentences[k])

generated_images_sub = []
for k in range(len(generated_images)): 
  imagenes = generated_images[k].copy()
  text_to_add=sub_names[k][0]
  result=add_text_to_img(text_to_add,imagenes)
  generated_images_sub.append(result)
  #display(result)
  #print(text_to_add, len(sub_names[k]))


# Step 7 - Creation of audio
c = 0
mp3_names = []
mp3_lengths = []
for k in range(len(generated_images)):
    text_to_add=sub_names[k][0]
    print(text_to_add)
    f_name = 'audio_'+str(c)+'.mp3'
    mp3_names.append(f_name)
    # The text that you want to convert to audio
    mytext = text_to_add
    # Language in which you want to convert
    language = 'en'
    # Passing the text and language to the engine;
    # slow=False tells the module to read the text
    # at normal speed rather than slowly
    myobj = gTTS(text=mytext, lang=language, slow=False)
    # Saving the converted audio in an mp3 file
    sound_file=f_name
    myobj.save(sound_file)
    audio = MP3(sound_file)
    duration=audio.info.length
    mp3_lengths.append(duration)
    print(audio.info.length)
    c+=1
#print(mp3_names)
#print(mp3_lengths)

# Step 8 - Merge audio files
cwd = (getcwd()).replace(chr(92), '/')
#export_path = f'{cwd}/result.mp3'
export_path ='result.mp3'
MP3_FILES = glob.glob(pathname=f'{cwd}/*.mp3', recursive=True)
silence = AudioSegment.silent(duration=500)
full_audio = AudioSegment.empty()    # accumulates all the mp3 segments
for n, mp3_file in enumerate(mp3_names):
    mp3_file = mp3_file.replace(chr(92), '/')
    print(n, mp3_file)

    # Load the current mp3 into `audio_segment`
    audio_segment = AudioSegment.from_mp3(mp3_file)

    # Just accumulate the new `audio_segment` + `silence`
    full_audio += audio_segment + silence
    print('Merging ', n)

# The loop will exit once all files in the list have been used
# Then export    
full_audio.export(export_path, format='mp3')
print('\ndone!')

# Step 9 - Creation of the video with durations adjusted to the audio
c = 0
file_names = []
for img in generated_images_sub:
  f_name = 'img_'+str(c)+'.jpg'
  file_names.append(f_name)
  img = img.save(f_name)
  c+=1
print(file_names)
clips=[]
d=0
for m in file_names:
  duration=mp3_lengths[d]
  print(d,duration)
  clips.append(ImageClip(m).set_duration(duration+0.5))
  d+=1
concat_clip = concatenate_videoclips(clips, method="compose")
concat_clip.write_videofile("result_new.mp4", fps=24)

# Step 10 - Merge Video + Audio
movie_name = 'result_new.mp4'
export_path='result.mp3'
movie_final= 'result_final.mp4'

def combine_audio(vidname, audname, outname, fps=60): 
    import moviepy.editor as mpe
    my_clip = mpe.VideoFileClip(vidname)
    audio_background = mpe.AudioFileClip(audname)
    final_clip = my_clip.set_audio(audio_background)
    final_clip.write_videofile(outname,fps=fps)

combine_audio(movie_name, export_path, movie_final) # creates a new file with the audio track added
Imageio: 'ffmpeg-linux64-v3.3.1' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg-linux64-v3.3.1 (43.8 MB)
Downloading: 45929032/45929032 bytes (100.0%)
  Done
File saved as /root/.imageio/ffmpeg/ffmpeg-linux64-v3.3.1.
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data]   Unzipping tokenizers/punkt.zip.
WARNING:py.warnings:/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py:1207: UserWarning: Neither `max_length` nor `max_new_tokens` have been set, `max_length` will default to 142 (`self.config.max_length`). Controlling `max_length` via the config is deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
  UserWarning,

using device cuda
downloading tokenizer params
intializing TextTokenizer
tokenizing text
['Ġonce', ',']
['Ġlaura']
['Ġwent']
['Ġto']
['Ġthe']
['Ġsupermarket']
['Ġto']
['Ġbuy']
['Ġthe']
['Ġingredients']
['Ġto']
['Ġmake']
['Ġa']
['Ġcake']
17 text tokens [0, 6619, 11, 7309, 8398, 123, 99, 12553, 123, 403, 99, 13241, 123, 1077, 58, 2354, 2]
downloading encoder params
initializing DalleBartEncoder
encoding text tokens
downloading decoder params
initializing DalleBartDecoder
downloading detokenizer params
initializing VQGanDetokenizer
detokenizing image
using device cuda
intializing TextTokenizer
tokenizing text
['Ġher']
['Ġfriends']
['Ġcome']
['Ġto']
['Ġher']
['Ġhouse']
['Ġand']
['Ġhelp']
['Ġher']
['Ġto']
['Ġprepare']
['Ġthe']
['Ġcake']
15 text tokens [0, 447, 3103, 4118, 123, 447, 610, 128, 1980, 447, 123, 11147, 99, 2354, 2]
initializing DalleBartEncoder
encoding text tokens
initializing DalleBartDecoder
initializing VQGanDetokenizer
detokenizing image
using device cuda
intializing TextTokenizer
tokenizing text
['Ġbecause']
['Ġtoday']
['Ġis']
['Ġher']
['Ġbirthday', ',']
['Ġshe']
['Ġhas']
['Ġfriends']
['Ġhelp']
['Ġher']
['Ġprepare']
['Ġit']
['Ġfor']
['Ġher']
['Ġbirthday']
18 text tokens [0, 6177, 1535, 231, 447, 1249, 11, 748, 1238, 3103, 1980, 447, 11147, 353, 129, 447, 1249, 2]
initializing DalleBartEncoder
encoding text tokens
initializing DalleBartDecoder
initializing VQGanDetokenizer
detokenizing image
using device cuda
intializing TextTokenizer
tokenizing text
['Ġlaura', "'s"]
['Ġfriends']
['Ġcome']
['Ġand']
['Ġhelp']
['Ġprepare']
['Ġher']
['Ġto']
['Ġmake']
['Ġher']
['Ġbirthday']
['Ġcake']
15 text tokens [0, 7309, 168, 3103, 4118, 128, 1980, 11147, 447, 123, 1077, 447, 1249, 2354, 2]
initializing DalleBartEncoder
encoding text tokens
initializing DalleBartDecoder
initializing VQGanDetokenizer
detokenizing image
 Once, Laura went to the supermarket to buy the ingredients to make a cake
5.856
 Her friends come to her house and help her to prepare the cake
4.584
 Because today is her birthday, she has friends help her prepare it for her birthday
6.12
 Laura's friends come and help prepare her to make her birthday cake
4.968
0 audio_0.mp3
Merging  0
1 audio_1.mp3
Merging  1
2 audio_2.mp3
Merging  2
3 audio_3.mp3
Merging  3

done!
['img_0.jpg', 'img_1.jpg', 'img_2.jpg', 'img_3.jpg']
0 5.856
1 4.584
2 6.12
3 4.968
[MoviePy] >>>> Building video result_new.mp4
[MoviePy] Writing video result_new.mp4
100%|██████████| 565/565 [00:01<00:00, 416.79it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: result_new.mp4 


[MoviePy] >>>> Building video result_final.mp4
[MoviePy] Writing audio in result_finalTEMP_MPY_wvf_snd.mp3
100%|██████████| 521/521 [00:00<00:00, 970.26it/s]
[MoviePy] Done.
[MoviePy] Writing video result_final.mp4
100%|██████████| 1413/1413 [00:02<00:00, 616.09it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: result_final.mp4 


from IPython.display import HTML
from base64 import b64encode
# Show video
mp4 = open('result_final.mp4','rb').read()
data_url = "data:video/mp4;base64," + b64encode(mp4).decode()
HTML("""
<video width=400 controls>
      <source src="%s" type="video/mp4">
</video>
""" % data_url)